- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > Singapore (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
ModelFail was first introduced by Thomas and Brunskill [2016] to show the failure of the model-based approach in the
We would like to thank the reviewers for appreciating our novel contributions on the algorithmic and theoretical fronts! We focus on clarifying our experimental results in this rebuttal. Please refer to Section 5.1 (lines 258-262; note the typo in line 262: the first symbol stands for "unobserved", while the second is an observed variable that the policy needs to react to). Also see Section C (lines 567-575) in the supplement for more details. The time-invariant ModelWin and MountainCar environments we used in the paper are finite-horizon undiscounted MDPs.
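Since the rebuttal stresses that these environments are finite-horizon undiscounted MDPs, a minimal sketch of the corresponding return is given below. The reward sequence and horizon are illustrative assumptions, not the paper's actual setup.

```python
def undiscounted_return(rewards):
    """Return of a finite-horizon undiscounted episode: the plain sum of
    per-step rewards, with no discount factor gamma applied."""
    return sum(rewards)

# One illustrative episode with horizon H = 4 (toy values, not from the paper).
episode_rewards = [1.0, -1.0, 1.0, 1.0]
G = undiscounted_return(episode_rewards)  # G == 2.0
```

In the undiscounted setting the return of a length-H episode is simply the sum of its H rewards, which is why the finite horizon matters: without discounting, the sum is only guaranteed to be finite because the episode terminates.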
Explanation-based Data Augmentation for Image Classification
All the datasets used in our paper are publicly available and are to be used for research purposes. Table 1 gives the download links and licenses of these datasets.
- CUB-Families (2): https://github.com/HCPLab-SYSU/HS (use is restricted to non-commercial research and educational purposes)
- Tiny ImageNet: http://cs231n.stanford.edu/tiny-imagenet-200.zip (use is restricted to non-commercial research and educational purposes)

Figure 1: Sample images for three classes of the CUB dataset collected in (3): Cardinal, Cerulean Warbler, and Least Auklet.
Figure 2: Sample images for three classes of the Tiny-ImageNet dataset collected using the Flickr API: Abacus, Arabian Camel, and Wooden Spoon.
- North America > United States > California > Santa Clara County > Palo Alto (0.24)
- Asia > Singapore (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
Challenges and Best Practices in Corporate AI Governance: Lessons from the Biopharmaceutical Industry
Mökander, Jakob, Sheth, Margi, Gersbro-Sundler, Mimmi, Blomgren, Peder, Floridi, Luciano
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other?
Mökander, Jakob, Juneja, Prathm, Watson, David, Floridi, Luciano
On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).
- North America > United States (1.00)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.05)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Self-Reflection Outcome is Sensitive to Prompt Construction
Liu, Fengyuan, AlDahoul, Nouar, Eady, Gregory, Zaki, Yasir, AlShebli, Bedoor, Rahwan, Talal
Large language models (LLMs) demonstrate impressive zero-shot and few-shot reasoning capabilities. Some propose that such capabilities can be improved through self-reflection, i.e., letting LLMs reflect on their own output to identify and correct mistakes in the initial responses. However, despite some evidence showing the benefits of self-reflection, recent studies offer mixed results. Here, we aim to reconcile these conflicting findings by first demonstrating that the outcome of self-reflection is sensitive to prompt wording; e.g., LLMs are more likely to conclude that they have made a mistake when explicitly prompted to find mistakes. Consequently, idiosyncrasies in reflection prompts may lead LLMs to change correct responses unnecessarily. We show that most prompts used in the self-reflection literature are prone to this bias. We then propose different ways of constructing prompts that are conservative in identifying mistakes and show that self-reflection using such prompts results in higher accuracy. Our findings highlight the importance of prompt engineering in self-reflection tasks. We release our code at https://github.com/Michael98Liu/mixture-of-prompts.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > United States > New York (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
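The contrast the abstract draws between mistake-seeking and conservative reflection prompts can be sketched as below. The two prompt wordings and the helper function are illustrative assumptions, not the paper's verbatim prompts or code.

```python
# Illustrative reflection-prompt styles (assumed wordings, not from the paper).
BIASED_PROMPT = (
    "Review your previous answer and find the mistakes in it. "
    "Then answer the question again."
)
CONSERVATIVE_PROMPT = (
    "Review your previous answer. Only if you are confident it is wrong, "
    "revise it; otherwise, keep your original answer."
)

def build_reflection_turn(question, first_answer, reflection_prompt):
    """Assemble a chat-style message list for one self-reflection round:
    the original question, the model's first answer, and the reflection prompt."""
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": reflection_prompt},
    ]
```

The first wording presupposes a mistake exists, which is the bias the abstract describes: it nudges the model toward revising answers that were already correct. The second leaves "no change" as the default.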
LineaPy Data Science Workflow In Just Two Lines: MLOps Made Easy
LineaPy is a Python package for capturing, analyzing, and automating data science workflows. At a high level, LineaPy traces the sequence of code execution to form a comprehensive understanding of the code and its context. This understanding allows LineaPy to provide a set of tools that help data scientists bring their work to production more quickly and easily, with just two lines of code. I saw their announcement last week about LineaPy.
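To make the "traces the sequence of code execution" idea concrete, here is a stdlib-only toy that records which lines of a function actually run. This is not LineaPy's API or implementation, just a sketch of the underlying tracing concept using Python's `sys.settrace` hook.

```python
import sys

def trace_lines(func):
    """Record which body lines of `func` execute, as offsets from its `def` line.
    A toy illustration of execution tracing, not LineaPy's implementation."""
    executed = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.append(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer  # keep tracing line events inside this frame

    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)  # always uninstall the trace hook
    return executed

def pipeline():
    x = 1          # offset 1
    y = x + 1      # offset 2
    return y       # offset 3

offsets = trace_lines(pipeline)  # e.g. [1, 2, 3]
```

A tool like LineaPy builds on this kind of execution record to slice out just the code needed to reproduce a chosen result, which is what enables turning notebook work into a pipeline with minimal extra code.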